Ethics and the implementation of AI in healthcare

Adam La Caze
The University of Queensland

8 December 2025

What is AI?

AI is a field focused on developing systems that simulate human or ideal rationality, assessed against either the process or the outcome of thinking

(Bringsjord and Govindarajulu 2024; Russell and Norvig 2022)

How should we think about AI? (Bender and Hanna 2025)

  • “AI” (artificial intelligence) is functioning as a marketing term

  • AI is neither artificial nor intelligent

  • There are social, political and commercial factors influencing the development of AI and the discussions we have about it

America’s AI Action Plan

Led by the Department of Commerce (DOC) through the National Institute of Standards and Technology (NIST), revise the NIST AI Risk Management Framework to eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change

(Executive Office of the President of the United States 2025, 4)

How should we think about AI?

  • Compare other examples of technological impact on healthcare: camera, pharmacogenomics

Gartner’s Hype Cycle

Ethical project

How can we determine that specific AI tools/approaches work?

How do we implement AI in healthcare such that the benefits outweigh the harms for as many populations as possible?

The reference class problem

If we are asked to find the probability holding for an individual future event, we must first incorporate the case in a suitable reference class. An individual thing or event may be incorporated in many reference classes, from which different probabilities will result. This ambiguity has been called the problem of the reference class.

Reichenbach (1949)
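
Reichenbach’s ambiguity can be made concrete with a toy calculation. The sketch below uses entirely hypothetical counts to show how the same patient, placed in different reference classes, receives different probability estimates for the same event:

```python
# Illustrative sketch of the reference class problem.
# All class names and counts are hypothetical.

# For each reference class: (number of patients with the event, class size)
reference_classes = {
    "all adults": (120, 1000),
    "adults aged 60-70": (45, 200),
    "adults aged 60-70 with diabetes": (30, 80),
}

# The same individual belongs to every one of these classes,
# yet each class yields a different probability for the event.
for name, (events, size) in reference_classes.items():
    print(f"{name}: P(event) = {events / size:.2f}")
```

Narrower classes are arguably more relevant to the individual, but they contain fewer members, so their frequency estimates are less reliable: the ambiguity is not resolved simply by choosing the narrowest class.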

No free lunch theorem (Wolpert 1996; Sterkenburg and Grünwald 2021)

A learning algorithm that performs well on some kinds of problems must perform worse on others; averaged over all possible data-generating functions, every learning algorithm has exactly the same expected performance.
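
A toy illustration (not a proof of the theorem) can make this concrete: enumerate every possible labelling of a four-point domain, train two different rules on three of the points, and average their accuracy on the held-out point. Both learners, like any fixed learner, average exactly 0.5:

```python
from itertools import product

# Toy illustration of the no-free-lunch idea: averaged over all
# possible target functions on a tiny domain, two very different
# learners have identical off-training-set accuracy.

domain = [0, 1, 2, 3]            # four inputs
train_x, test_x = [0, 1, 2], 3   # labels seen for 0-2; predict on 3

def majority_learner(labels):
    """Predict the majority training label for the unseen point."""
    return int(sum(labels) > len(labels) / 2)

def constant_learner(labels):
    """Always predict 0, ignoring the training data."""
    return 0

accuracies = {}
targets = list(product([0, 1], repeat=len(domain)))  # all 16 labellings
for learner in (majority_learner, constant_learner):
    correct = sum(
        1 for target in targets
        if learner([target[x] for x in train_x]) == target[test_x]
    )
    accuracies[learner.__name__] = correct / len(targets)
    print(f"{learner.__name__}: average accuracy = {accuracies[learner.__name__]:.2f}")
```

The point carries over to clinical prediction: a model only outperforms alternatives on the data-generating processes it is suited to, which is why validation in the target population matters.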

Generalizability in clinical prediction models (Chekroud et al. 2024)

Algorithmic bias (Obermeyer et al. 2019)
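
Obermeyer et al. found that an algorithm predicting healthcare costs as a proxy for health needs under-identified Black patients, who incurred lower costs at the same level of need. A minimal simulation of that proxy-label mechanism (all parameters are hypothetical, not the paper’s data) shows how ranking patients by cost can exclude a group with reduced access to care:

```python
import random

random.seed(0)

def simulate(group, n=10_000):
    """Return (true need, observed cost) pairs for a group (hypothetical model)."""
    access = 1.0 if group == "A" else 0.6   # group B faces access barriers
    return [(need, need * access) for need in (random.random() for _ in range(n))]

# Pool both groups, rank by predicted cost, flag the top 10% for extra care
pool = [(need, cost, g) for g in ("A", "B") for need, cost in simulate(g)]
pool.sort(key=lambda p: p[1], reverse=True)
flagged = pool[: len(pool) // 10]

share_b = sum(1 for p in flagged if p[2] == "B") / len(flagged)
print(f"Group B share of flagged patients: {share_b:.2f} (population share: 0.50)")
```

Both groups have identical distributions of true need; the disparity arises entirely from optimising for the cost proxy rather than need itself.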

SALIENT (van der Vegt et al. 2023, 2024)

What does this mean for practitioners?

  1. Recognise we are in a hype cycle in relation to AI

  2. Develop critical appraisal skills in relation to the implementation of AI tools in practice

  3. Participate in discussions regarding what we value in relation to healthcare and technological progress in healthcare

References

Bender, Emily M., and Alex Hanna. 2025. The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Random House.
Bringsjord, Selmer, and Naveen Sundar Govindarajulu. 2024. “Artificial Intelligence.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta and Uri Nodelman, Fall 2024. Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2024/entries/artificial-intelligence/.
Chekroud, Adam M., Matt Hawrilenko, Hieronimus Loho, Julia Bondar, Ralitza Gueorguieva, Alkomiet Hasan, Joseph Kambeitz, et al. 2024. “Illusory Generalizability of Clinical Prediction Models.” Science 383 (6679): 164–67. https://doi.org/10.1126/science.adg8538.
Executive Office of the President of the United States. 2025. “America’s AI Action Plan.” US Government.
Obermeyer, Ziad, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 2019. “Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations.” Science 366 (6464): 447–53. https://doi.org/10.1126/science.aax2342.
Reichenbach, Hans. 1949. The Theory of Probability. Berkeley: University of California Press. http://www.ucpress.edu/op.php?isbn=9780520019294.
Russell, Stuart J., and Peter Norvig. 2022. Artificial Intelligence: A Modern Approach. Harlow, England: Pearson Education Limited.
Sterkenburg, Tom F., and Peter D. Grünwald. 2021. “The No-Free-Lunch Theorems of Supervised Learning.” Synthese 199 (3–4): 9979–10015. https://doi.org/10.1007/s11229-021-03233-1.
Vegt, Anton H van der, Victoria Campbell, Imogen Mitchell, James Malycha, Joanna Simpson, Tracy Flenady, Arthas Flabouris, et al. 2024. “Systematic Review and Longitudinal Analysis of Implementing Artificial Intelligence to Predict Clinical Deterioration in Adult Hospitals: What Is Known and What Remains Uncertain.” Journal of the American Medical Informatics Association 31 (2): 509–24. https://doi.org/10.1093/jamia/ocad220.
Vegt, Anton H van der, Ian A Scott, Krishna Dermawan, Rudolf J Schnetler, Vikrant R Kalke, and Paul J Lane. 2023. “Implementation Frameworks for End-to-End Clinical AI: Derivation of the SALIENT Framework.” Journal of the American Medical Informatics Association 30 (9): 1503–15. https://doi.org/10.1093/jamia/ocad088.
Wolpert, David H. 1996. “The Lack of A Priori Distinctions Between Learning Algorithms.” Neural Computation 8 (7): 1341–90. https://doi.org/10.1162/neco.1996.8.7.1341.